Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization

Authors

  • Fabian Pedregosa
  • Rémi Leblond
  • Simon Lacoste-Julien
Abstract

Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such as the Lasso, group Lasso or empirical risk minimization with convex constraints. In this work, we propose and analyze PROXASAGA, a fully asynchronous sparse method inspired by SAGA, a variance reduced incremental gradient algorithm. The proposed method is easy to implement and significantly outperforms the state of the art on several nonsmooth, large-scale problems. We prove that our method achieves a theoretical linear speedup with respect to the sequential version under assumptions on the sparsity of gradients and block-separability of the proximal term. Empirical benchmarks on a multi-core architecture illustrate practical speedups of up to 12x on a 20-core machine.
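As a rough illustration of the building block that ProxASAGA parallelizes, below is a minimal sequential proximal SAGA sketch for the Lasso. This is not the authors' implementation: the asynchronous, sparse-update machinery that produces the reported speedups is omitted, and the function and parameter names are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_saga_lasso(A, b, lam, step, n_iter=100_000, seed=0):
    """Sequential proximal SAGA sketch for the Lasso:
    min_x (1/2n) ||Ax - b||^2 + lam * ||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))       # memory of past per-sample gradients
    grad_avg = grads.mean(axis=0)  # running average of the memory
    for _ in range(n_iter):
        i = rng.integers(n)
        g_i = (A[i] @ x - b[i]) * A[i]        # fresh gradient of sample i
        v = g_i - grads[i] + grad_avg         # variance-reduced direction
        x = soft_threshold(x - step * v, step * lam)  # proximal step
        grad_avg += (g_i - grads[i]) / n      # keep average consistent
        grads[i] = g_i
    return x
```

The property the analysis exploits is visible even in this sketch: when the per-sample gradients are sparse and the proximal term is block-separable (as for the ℓ1 norm), each update touches only a few coordinates, which is what makes lock-free parallel execution viable.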

Similar resources

Scalable nonconvex inexact proximal splitting

We study a class of large-scale, nonsmooth, and nonconvex optimization problems. In particular, we focus on nonconvex problems with composite objectives. This class includes the extensively studied class of convex composite objective problems as a subclass. To solve composite nonconvex problems we introduce a powerful new framework based on asymptotically nonvanishing errors, avoiding the commo...

An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...

Scalable Semi-Supervised Learning over Networks using Nonsmooth Convex Optimization

We propose a scalable method for semi-supervised (transductive) learning from massive network-structured datasets. Our approach to semi-supervised learning is based on representing the underlying hypothesis as a graph signal with small total variation. Requiring a small total variation of the graph signal representing the underlying hypothesis corresponds to the central smoothness assumption th...
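Concretely, the total variation of a graph signal x over a weighted edge set is the sum over edges (i, j) of w_ij · |x_i − x_j|: a small value means the labels change across few edges. A minimal sketch follows; the function name and edge-list representation are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

def graph_total_variation(x, edges, weights=None):
    """Total variation of a graph signal: sum over edges (i, j)
    of w_ij * |x[i] - x[j]|. `edges` is an (m, 2) integer array."""
    edges = np.asarray(edges)
    diffs = np.abs(x[edges[:, 0]] - x[edges[:, 1]])
    return diffs.sum() if weights is None else np.dot(weights, diffs)

# Example: on a chain 0-1-2-3, a signal with a single jump has total
# variation equal to the jump height across the one edge where it changes.
x = np.array([0.0, 0.0, 1.0, 1.0])
edges = [(0, 1), (1, 2), (2, 3)]
print(graph_total_variation(x, edges))  # 1.0
```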

Smooth Exact Penalty and Barrier Functions for Nonsmooth Optimization

For constrained nonsmooth optimization problems, continuously differentiable penalty functions and barrier functions are given. They are proved exact in the sense that under some nondegeneracy assumption, local optimizers of a nonlinear program are also optimizers of the associated penalty or barrier function. This is achieved by augmenting the dimension of the program by a variable that control...

Sufficiency and duality for a nonsmooth vector optimization problem with generalized $\alpha$-$d_I$-type-I univexity over cones

In this paper, using Clarke’s generalized directional derivative and $d_I$-invexity, we introduce new concepts of nonsmooth $K$-$\alpha$-$d_I$-invex and generalized type I univex functions over cones for a nonsmooth vector optimization problem with cone constraints. We obtain some sufficient optimality conditions and Mond-Weir type duality results under the aforementioned generalized invexity and type I cone-univexi...


Journal title:

Volume   Issue

Pages  -

Publication date: 2017